
Conversation

suryavanshi (Collaborator)

Added a Flask blueprint to handle sentiment prediction requests. Added a sentiment server that reads text from TEXT_QUEUE. Use it as shown below:
    curl -X POST \
      http://127.0.0.1:3031/sentimentV1/predict \
      -H 'Cache-Control: no-cache' \
      -F textv='the movie is good' \
      -F model_name=base
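Rough sketch of the blueprint's queue-then-poll flow, mirroring the pattern the image endpoints already use. The Redis handle, job schema, and polling interval below are illustrative assumptions, not the exact code:

    import json
    import time
    import uuid

    import redis
    from flask import Blueprint, current_app, jsonify, request

    sentiment_api = Blueprint("sentimentV1", __name__)
    # Assumption: the same Redis instance that backs the image queue.
    db = redis.StrictRedis(host="localhost", port=6379, db=0)

    @sentiment_api.route("/sentimentV1/predict", methods=["POST"])
    def predict():
        """Queue the text for the sentiment server and poll for its result."""
        text = request.form["textv"]
        model_name = request.form.get("model_name", "base")

        # Push the job onto the shared text queue; the sentiment server
        # pops it from the other end and writes a prediction back under job_id.
        job_id = str(uuid.uuid4())
        job = {"id": job_id, "text": text, "model_name": model_name}
        db.rpush(current_app.config["SENTIMENT_TEXT_QUEUE"], json.dumps(job))

        # Poll until the worker has stored a result for this job.
        while True:
            result = db.get(job_id)
            if result is not None:
                db.delete(job_id)
                return jsonify(json.loads(result))
            time.sleep(0.25)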

@hundredblocks (Contributor) left a comment:

Mostly minor comments. The only additional questions I have are:

  • Have you tested that all prior endpoints still work (for images)?
  • Could you add an updated README to your PR that describes how to use the sentiment API?

    transfer the topless InceptionV3 model
    to classify new classes
    """
    print "Inside Transfer model"
hundredblocks (Contributor):

I don't think we need this. Seems like an artifact of your debugging :)

suryavanshi (Collaborator, Author):

Yes, I will remove this and the other debugging-related print statements.

    batch_size = int(batch_size)

    print "nb_val_samples:{}".format(nb_val_samples)
hundredblocks (Contributor):

I don't think we need this either, especially if we want to move to Python 3 down the line
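If we did want to keep a message like this, the Python 3-friendly version would go through logging rather than a print statement, e.g. (just a sketch, reusing the nb_val_samples variable from the hunk above):

    import logging

    logging.info("nb_val_samples: %s", nb_val_samples)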


    def __get_nb_files(self, directory):
        """Get number of files by searching local dir recursively"""
        logging.info("Inside __get_nb_files")
hundredblocks (Contributor):

Same as above



    if textIDs:
        print("* Predicting for {} of Models".format(len(textIDs.keys())))
hundredblocks (Contributor):

We should either use print everywhere or logging everywhere (I vote logging)
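Something like this, as a sketch (the logger name and configuration are up to you):

    import logging

    logger = logging.getLogger(__name__)

    if textIDs:
        logger.info("* Predicting for %d models", len(textIDs))
        logger.info("* Number of sentences: %d", num_text)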

print("* Predicting for {} of Models".format(len(textIDs.keys())))
print("* Number of Sentences: {}".format(num_text))

r = {"positive":0.5, "negative":0.5}
hundredblocks (Contributor):

What is r for?

    INCEPTIONV3_IMAGE_QUEUE = app.config['INCEPTIONV3_IMAGE_QUEUE']
    INCEPTIONV3_TOPLESS_MODEL_PATH = app.config['INCEPTIONV3_TOPLESS_MODEL_PATH']

    SENTIMENT_TEXT_QUEUE = app.config['SENTIMENT_TEXT_QUEUE'] #Added by MS on 22-Jan-2019
hundredblocks (Contributor):

I don't think you need that comment; git history already records who added this line and when.

    # init the transfer learning manager
    this_IV3_transfer = inceptionV3_transfer_retraining.InceptionTransferLeaner(model_name)
    new_model, label_dict, history = this_IV3_transfer.transfer_model(image_data_path,
    print "Done loading model"
hundredblocks (Contributor):

Log instead of print.
